Results 1 - 20 of 25
1.
J Emerg Med ; 66(2): 184-191, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38369413

ABSTRACT

BACKGROUND: The adoption of point-of-care ultrasound (POCUS) has greatly improved the ability to rapidly evaluate unstable emergency department (ED) patients at the bedside. One major use of POCUS is to obtain echocardiograms to assess cardiac function. OBJECTIVES: We developed EchoNet-POCUS, a novel deep learning system, to aid emergency physicians (EPs) in interpreting POCUS echocardiograms and to reduce operator-to-operator variability. METHODS: We collected a new dataset of POCUS echocardiogram videos obtained in the ED by EPs and annotated the cardiac function and quality of each video. Using this dataset, we trained EchoNet-POCUS to evaluate both cardiac function and video quality in POCUS echocardiograms. RESULTS: EchoNet-POCUS achieves an area under the receiver operating characteristic curve (AUROC) of 0.92 (0.89-0.94) for predicting whether cardiac function is abnormal and an AUROC of 0.81 (0.78-0.85) for predicting video quality. CONCLUSIONS: EchoNet-POCUS can be applied to bedside echocardiogram videos in real time using commodity hardware, as we demonstrate in a prospective pilot study.
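The AUROC values reported in the RESULTS above can be computed from raw model scores with a rank-based estimator. A minimal sketch (not the authors' evaluation code; the labels and scores below are toy values):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U identity: the probability that a
    randomly chosen positive is scored above a randomly chosen negative."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 1 = abnormal cardiac function, scores = model probabilities.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(round(auroc(labels, scores), 3))  # prints 0.889 (8 of 9 pairs correct)
```

The same pairwise estimator underlies library implementations such as scikit-learn's `roc_auc_score`.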


Subjects
Echocardiography; Point-of-Care Systems; Humans; Prospective Studies; Pilot Projects; Ultrasonography; Emergency Service, Hospital
2.
Sci Rep ; 14(1): 11, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38167849

ABSTRACT

Transesophageal echocardiography (TEE) imaging is a vital tool used in the evaluation of complex cardiac pathology and the management of cardiac surgery patients. A key limitation to the application of deep learning strategies to intraoperative and intraprocedural TEE data is the complexity and unstructured nature of these images. In the present study, we developed a deep learning-based, multi-category TEE view classification model that can be used to add structure to intraoperative and intraprocedural TEE imaging data. More specifically, we trained a convolutional neural network (CNN) to predict standardized TEE views using labeled intraoperative and intraprocedural TEE videos from Cedars-Sinai Medical Center (CSMC). We externally validated our model on intraoperative TEE videos from Stanford University Medical Center (SUMC). Accuracy of our model was high across all labeled views. The highest performance was achieved for the Trans-Gastric Left Ventricular Short Axis View (area under the receiver operating curve [AUC] = 0.971 at CSMC, 0.957 at SUMC), the Mid-Esophageal Long Axis View (AUC = 0.954 at CSMC, 0.905 at SUMC), the Mid-Esophageal Aortic Valve Short Axis View (AUC = 0.946 at CSMC, 0.898 at SUMC), and the Mid-Esophageal 4-Chamber View (AUC = 0.939 at CSMC, 0.902 at SUMC). Ultimately, we demonstrate that our deep learning model can accurately classify standardized TEE views, which will facilitate further downstream deep learning analyses for intraoperative and intraprocedural TEE imaging.


Subjects
Cardiac Surgical Procedures; Deep Learning; Humans; Echocardiography, Transesophageal/methods; Echocardiography/methods; Aortic Valve
3.
Lancet Digit Health ; 6(1): e70-e78, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38065778

ABSTRACT

BACKGROUND: Preoperative risk assessments used in clinical practice are insufficient in their ability to identify risk for postoperative mortality. Deep-learning analysis of electrocardiography can identify hidden risk markers that can help to prognosticate postoperative mortality. We aimed to develop a prognostic model that accurately predicts postoperative mortality in patients undergoing medical procedures and who had received preoperative electrocardiographic diagnostic testing. METHODS: In a derivation cohort of preoperative patients with available electrocardiograms (ECGs) from Cedars-Sinai Medical Center (Los Angeles, CA, USA) between Jan 1, 2015 and Dec 31, 2019, a deep-learning algorithm was developed to leverage waveform signals to discriminate postoperative mortality. We randomly split patients (8:1:1) into subsets for training, internal validation, and final algorithm test analyses. Model performance was assessed using area under the receiver operating characteristic curve (AUC) values in the hold-out test dataset and in two external hospital cohorts and compared with the established Revised Cardiac Risk Index (RCRI) score. The primary outcome was post-procedural mortality across three health-care systems. FINDINGS: 45 969 patients had a complete ECG waveform image available for at least one 12-lead ECG performed within the 30 days before the procedure date (59 975 inpatient procedures and 112 794 ECGs): 36 839 patients in the training dataset, 4549 in the internal validation dataset, and 4581 in the internal test dataset. In the held-out internal test cohort, the algorithm discriminates mortality with an AUC value of 0·83 (95% CI 0·79-0·87), surpassing the discrimination of the RCRI score with an AUC of 0·67 (0·61-0·72). The algorithm similarly discriminated risk for mortality in two independent US health-care systems, with AUCs of 0·79 (0·75-0·83) and 0·75 (0·74-0·76), respectively. 
Patients determined to be high risk by the deep-learning model had an unadjusted odds ratio (OR) of 8·83 (5·57-13·20) for postoperative mortality compared with an unadjusted OR of 2·08 (0·77-3·50) for postoperative mortality for RCRI scores of more than 2. The deep-learning algorithm performed similarly for patients undergoing cardiac surgery (AUC 0·85 [0·77-0·92]), non-cardiac surgery (AUC 0·83 [0·79-0·88]), and catheterisation or endoscopy suite procedures (AUC 0·76 [0·72-0·81]). INTERPRETATION: A deep-learning algorithm interpreting preoperative ECGs can improve discrimination of postoperative mortality. The deep-learning algorithm worked equally well for risk stratification of cardiac surgeries, non-cardiac surgeries, and catheterisation laboratory procedures, and was validated in three independent health-care systems. This algorithm can provide additional information to clinicians making the decision to perform medical procedures and stratify the risk of future complications. FUNDING: National Heart, Lung, and Blood Institute.


Subjects
Deep Learning; Humans; Risk Assessment/methods; Algorithms; Prognosis; Electrocardiography
4.
Nature ; 616(7957): 520-524, 2023 04.
Article in English | MEDLINE | ID: mdl-37020027

ABSTRACT

Artificial intelligence (AI) has been developed for echocardiography1-3, although it has not yet been tested with blinding and randomization. Here we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov ID: NCT05140642; no outside funding) of AI versus sonographer initial assessment of left ventricular ejection fraction (LVEF) to evaluate the impact of AI in the interpretation workflow. The primary end point was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (more than 5% change). From 3,769 echocardiographic studies screened, 274 studies were excluded owing to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference of -10.4%, 95% confidence interval: -13.2% to -7.7%, P < 0.001 for non-inferiority, P < 0.001 for superiority). The mean absolute difference between final cardiologist assessment and independent previous cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference of -0.96%, 95% confidence interval: -1.34% to -0.54%, P < 0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between the initial assessments by AI versus the sonographer (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was non-inferior to assessment by sonographers.
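The primary endpoint above is a difference in proportions of substantially changed studies, reported with a 95% confidence interval. A hedged sketch using a simple Wald interval (the trial's actual statistical analysis may differ, and the counts below are illustrative only, since per-arm study counts are not given here):

```python
import math

def two_proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference of two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Illustrative counts: 16.8% vs 27.2% substantially changed out of 1,000 each.
diff, (lo, hi) = two_proportion_diff_ci(168, 1000, 272, 1000)
```

For non-inferiority, one checks whether the upper CI bound stays below the prespecified margin.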


Subjects
Artificial Intelligence; Cardiologists; Echocardiography; Heart Function Tests; Humans; Artificial Intelligence/standards; Echocardiography/methods; Echocardiography/standards; Stroke Volume; Ventricular Function, Left; Single-Blind Method; Workflow; Reproducibility of Results; Heart Function Tests/methods; Heart Function Tests/standards
5.
J Am Soc Echocardiogr ; 36(5): 482-489, 2023 05.
Article in English | MEDLINE | ID: mdl-36754100

ABSTRACT

BACKGROUND: Significant interobserver and interstudy variability occurs for left ventricular (LV) functional indices despite standardization of measurement techniques. Artificial intelligence models trained on adult echocardiograms are not likely to be applicable to a pediatric population. We present EchoNet-Peds, a video-based deep learning algorithm that matches human expert performance in LV segmentation and ejection fraction (EF) estimation. METHODS: A large pediatric data set of 4,467 echocardiograms was used to develop EchoNet-Peds. EchoNet-Peds was trained on 80% of the data for segmentation of the left ventricle and estimation of LVEF. The remaining 20% was used to fine-tune and validate the algorithm. RESULTS: In both apical 4-chamber and parasternal short-axis views, EchoNet-Peds segments the left ventricle with a Dice similarity coefficient of 0.89. EchoNet-Peds estimates EF with a mean absolute error of 3.66% and can routinely identify pediatric patients with systolic dysfunction (area under the curve of 0.95). EchoNet-Peds was trained on pediatric echocardiograms and estimated EF significantly more accurately (P < .001) than an adult model applied to the same data. CONCLUSIONS: Accurate, rapid automation of EF assessment and recognition of systolic dysfunction in a pediatric population are feasible using EchoNet-Peds, with the potential for far-reaching clinical impact. In addition, the first large pediatric data set of annotated echocardiograms is now publicly available for efforts to develop pediatric-specific artificial intelligence algorithms.


Subjects
Deep Learning; Ventricular Dysfunction, Left; Adult; Humans; Child; Ventricular Function, Left; Stroke Volume; Artificial Intelligence; Echocardiography/methods; Ventricular Dysfunction, Left/diagnostic imaging
6.
NPJ Digit Med ; 5(1): 188, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36550271

ABSTRACT

Deep learning has been shown to accurately assess "hidden" phenotypes from medical imaging beyond traditional clinician interpretation. Using large echocardiography datasets from two healthcare systems, we test whether it is possible to predict age, race, and sex from cardiac ultrasound images using deep learning algorithms and assess the impact of varying confounding variables. Using a total of 433,469 videos from Cedars-Sinai Medical Center and 99,909 videos from Stanford Medical Center, we trained video-based convolutional neural networks to predict age, sex, and race. We found that deep learning models were able to identify age and sex, while unable to reliably predict race. Without considering confounding differences between categories, the AI model predicted sex with an AUC of 0.85 (95% CI 0.84-0.86), age with a mean absolute error of 9.12 years (95% CI 9.00-9.25), and race with AUCs ranging from 0.63 to 0.71. When predicting race, we show that tuning the proportion of confounding variables (age or sex) in the training data significantly impacts model AUC (ranging from 0.53 to 0.85), while sex and age prediction was not particularly impacted by adjusting the race proportion in the training dataset (AUC of 0.81-0.83 and 0.80-0.84, respectively). This suggests that a significant proportion of the AI model's performance in predicting race could come from confounding features being detected. Further work remains to identify the particular imaging features that associate with demographic information and to better understand the risks of demographic identification in medical AI as it pertains to potentially perpetuating bias and disparities.

7.
Cell Rep Methods ; 2(4): 100191, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35497493

ABSTRACT

We develop a deep learning approach, in silico immunohistochemistry (IHC), which takes routinely collected histochemically stained samples as input and computationally generates virtual IHC slide images. We apply in silico IHC to Alzheimer's disease samples, where several hallmark changes are conventionally identified using IHC staining across many regions of the brain. In silico IHC computationally identifies neurofibrillary tangles, β-amyloid plaques, and neuritic plaques at a high spatial resolution directly from the histochemical images, with areas under the receiver operating characteristic curve of between 0.88 and 0.92. In silico IHC learns to identify subtle cellular morphologies associated with these lesions and can generate in silico IHC slides that capture key features of the actual IHC.


Subjects
Alzheimer Disease; Humans; Alzheimer Disease/diagnosis; Neurofibrillary Tangles/metabolism; Amyloid beta-Peptides/metabolism; Immunohistochemistry; Brain/diagnostic imaging; Plaque, Amyloid/pathology
8.
JAMA Cardiol ; 7(4): 386-395, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35195663

ABSTRACT

IMPORTANCE: Early detection and characterization of increased left ventricular (LV) wall thickness can markedly impact patient care but is limited by under-recognition of hypertrophy, measurement error and variability, and difficulty differentiating causes of increased wall thickness, such as hypertrophy, cardiomyopathy, and cardiac amyloidosis. OBJECTIVE: To assess the accuracy of a deep learning workflow in quantifying ventricular hypertrophy and predicting the cause of increased LV wall thickness. DESIGN, SETTING, AND PARTICIPANTS: This cohort study included physician-curated cohorts from the Stanford Amyloid Center and Cedars-Sinai Medical Center (CSMC) Advanced Heart Disease Clinic for cardiac amyloidosis and the Stanford Center for Inherited Cardiovascular Disease and the CSMC Hypertrophic Cardiomyopathy Clinic for hypertrophic cardiomyopathy from January 1, 2008, to December 31, 2020. The deep learning algorithm was trained and tested on retrospectively obtained independent echocardiogram videos from Stanford Healthcare, CSMC, and the Unity Imaging Collaborative. MAIN OUTCOMES AND MEASURES: The main outcome was the accuracy of the deep learning algorithm in measuring left ventricular dimensions and identifying patients with increased LV wall thickness diagnosed with hypertrophic cardiomyopathy and cardiac amyloidosis. RESULTS: The study included 23 745 patients: 12 001 from Stanford Health Care (6509 [54.2%] female; mean [SD] age, 61.6 [17.4] years) and 1309 from CSMC (808 [61.7%] female; mean [SD] age, 62.8 [17.2] years) with parasternal long-axis videos and 8084 from Stanford Health Care (4201 [54.0%] female; mean [SD] age, 69.1 [16.8] years) and 2351 from CSMC (6509 [54.2%] female; mean [SD] age, 69.6 [14.7] years) with apical 4-chamber videos.
The deep learning algorithm accurately measured intraventricular wall thickness (mean absolute error [MAE], 1.2 mm; 95% CI, 1.1-1.3 mm), LV diameter (MAE, 2.4 mm; 95% CI, 2.2-2.6 mm), and posterior wall thickness (MAE, 1.4 mm; 95% CI, 1.2-1.5 mm) and classified cardiac amyloidosis (area under the curve [AUC], 0.83) and hypertrophic cardiomyopathy (AUC, 0.98) separately from other causes of LV hypertrophy. In external data sets from independent domestic and international health care systems, the deep learning algorithm accurately quantified ventricular parameters (domestic: R2, 0.96; international: R2, 0.90). For the domestic data set, the MAE was 1.7 mm (95% CI, 1.6-1.8 mm) for intraventricular septum thickness, 3.8 mm (95% CI, 3.5-4.0 mm) for LV internal dimension, and 1.8 mm (95% CI, 1.7-2.0 mm) for LV posterior wall thickness. For the international data set, the MAE was 1.7 mm (95% CI, 1.5-2.0 mm) for intraventricular septum thickness, 2.9 mm (95% CI, 2.4-3.3 mm) for LV internal dimension, and 2.3 mm (95% CI, 1.9-2.7 mm) for LV posterior wall thickness. The deep learning algorithm accurately detected cardiac amyloidosis (AUC, 0.79) and hypertrophic cardiomyopathy (AUC, 0.89) in the domestic external validation site. CONCLUSIONS AND RELEVANCE: In this cohort study, the deep learning model accurately identified subtle changes in LV wall geometric measurements and the causes of hypertrophy. Unlike with human experts, the deep learning workflow is fully automated, allowing for reproducible, precise measurements, and may provide a foundation for precision diagnosis of cardiac hypertrophy.
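The "MAE (95% CI)" figures above pair a mean absolute error with a confidence interval. A hedged sketch using a percentile bootstrap (an assumption; the study may have derived its intervals differently), with toy wall-thickness values in mm:

```python
import numpy as np

def mae_with_ci(y_true, y_pred, n_boot=2000, seed=0):
    """Mean absolute error with a percentile-bootstrap 95% CI."""
    errs = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    rng = np.random.default_rng(seed)
    # Resample the per-case errors with replacement, n_boot times.
    idx = rng.integers(0, len(errs), size=(n_boot, len(errs)))
    boot_means = errs[idx].mean(axis=1)
    return errs.mean(), (float(np.percentile(boot_means, 2.5)),
                         float(np.percentile(boot_means, 97.5)))

# Toy septal-thickness measurements in mm (invented values).
m, (lo, hi) = mae_with_ci([9.0, 11.0, 14.0, 10.0], [9.5, 10.0, 15.0, 10.0])
```

With real data one would resample hundreds of cases, so the interval narrows around the point estimate.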


Subjects
Amyloidosis; Cardiomyopathy, Hypertrophic; Deep Learning; Aged; Amyloidosis/diagnosis; Amyloidosis/diagnostic imaging; Cardiomyopathy, Hypertrophic/diagnosis; Cardiomyopathy, Hypertrophic/diagnostic imaging; Cohort Studies; Female; Humans; Hypertrophy, Left Ventricular/diagnostic imaging; Male; Middle Aged; Retrospective Studies
9.
Pac Symp Biocomput ; 27: 231-241, 2022.
Article in English | MEDLINE | ID: mdl-34890152

ABSTRACT

As deep learning plays an increasing role in making medical decisions, explainability is increasingly important for satisfying regulatory requirements and facilitating trust and transparency in deep learning approaches. In cardiac imaging, the task of accurately assessing left-ventricular function is crucial for evaluating patient risk, diagnosing cardiovascular disease, and clinical decision making. Previous video-based methods to predict ejection fraction yielded high accuracy, but at the expense of explainability, and did not utilize the standard clinical workflow. More explainable methods that match the clinical workflow, using 2D semantic segmentation, have been explored but found to have lower accuracy. To simultaneously increase accuracy and utilize an approach that matches the standard clinical workflow, we propose a frame-by-frame 3D depth-map approach that is both accurate (mean absolute error of 6.5%) and explainable, utilizing the conventional clinical workflow with its method-of-discs evaluation of left ventricular volume. This method is more reproducible than human evaluation and generates volume predictions that can be interpreted by clinicians, providing the opportunity to intervene and adjust the deep learning prediction.
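The method-of-discs evaluation of left ventricular volume mentioned above can be sketched as a sum of cylinder volumes along the long axis. This single-plane toy version is illustrative only, not the authors' implementation, and the diameters below are invented:

```python
import numpy as np

def volume_method_of_discs(diameters_mm, long_axis_mm):
    """Single-plane method of discs (Simpson's rule): slice the ventricle
    into equal-height discs along the long axis and sum cylinder volumes."""
    d = np.asarray(diameters_mm, dtype=float)
    h = long_axis_mm / len(d)                         # height of each disc
    return float(np.sum(np.pi * (d / 2.0) ** 2 * h))  # volume in mm^3

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

edv = volume_method_of_discs([30, 40, 44, 40, 30, 16], 80.0)  # end-diastole
esv = volume_method_of_discs([22, 30, 33, 30, 22, 12], 80.0)  # end-systole
```

Clinical practice typically uses 20 discs traced from one or two apical views; the disc count and diameters here are purely for illustration.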


Subjects
Deep Learning; Computational Biology; Humans; Workflow
10.
Pac Symp Biocomput ; 27: 337-348, 2022.
Article in English | MEDLINE | ID: mdl-34890161

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) has the potential to provide powerful, high-resolution signatures to inform disease prognosis and precision medicine. This paper takes an important first step towards this goal by developing an interpretable machine learning algorithm, CloudPred, to predict individuals' disease phenotypes from their scRNA-seq data. Predicting phenotype from scRNA-seq is challenging for standard machine learning methods: the number of cells measured can vary by orders of magnitude across individuals, and the cell populations are also highly heterogeneous. Typical analysis creates pseudo-bulk samples, which are biased toward prior annotations and also lose the single-cell resolution. CloudPred addresses these challenges via a novel end-to-end differentiable learning algorithm coupled with a biologically informed mixture-of-cell-types model. CloudPred automatically infers the cell subpopulations that are salient for the phenotype without prior annotations. We developed a systematic simulation platform to evaluate the performance of CloudPred and several alternative methods we propose, and find that CloudPred outperforms the alternatives across several settings. We further validated CloudPred on a real scRNA-seq dataset of 142 lupus patients and controls. CloudPred achieves an AUROC of 0.98 while identifying a specific subpopulation of CD4 T cells whose presence is highly indicative of lupus. CloudPred is a powerful new framework to predict clinical phenotypes from scRNA-seq data and to identify relevant cells.


Subjects
Computational Biology; Single-Cell Analysis; Gene Expression Profiling; Humans; Phenotype; RNA-Seq; Sequence Analysis, RNA
11.
Nat Biotechnol ; 40(4): 476-479, 2022 04.
Article in English | MEDLINE | ID: mdl-34845373

ABSTRACT

Current methods for spatial transcriptomics are limited by low spatial resolution. Here we introduce a method that integrates spatial gene expression data with histological image data from the same tissue section to infer higher-resolution expression maps. Using a deep generative model, our method characterizes the transcriptome of micrometer-scale anatomical features and can predict spatial gene expression from histology images alone.


Subjects
Transcriptome; Transcriptome/genetics
12.
EBioMedicine ; 73: 103613, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34656880

ABSTRACT

BACKGROUND: Laboratory testing is routinely used to assay blood biomarkers to provide information on physiologic state beyond what clinicians can evaluate from interpreting medical imaging. We hypothesized that deep learning interpretation of echocardiogram videos can provide additional value in understanding disease states and can evaluate common biomarker results. METHODS: We developed EchoNet-Labs, a video-based deep learning algorithm to detect evidence of anemia, elevated B-type natriuretic peptide (BNP), troponin I, and blood urea nitrogen (BUN), as well as values of ten additional lab tests directly from echocardiograms. We included patients (n = 39,460) aged 18 years or older with one or more apical-4-chamber echocardiogram videos (n = 70,066) from Stanford Healthcare for training and internal testing of EchoNet-Labs' performance in estimating the most proximal biomarker result. Without fine-tuning, the performance of EchoNet-Labs was further evaluated on an additional external test dataset (n = 1,301) from Cedars-Sinai Medical Center. We calculated the area under the curve (AUC) of the receiver operating characteristic curve for the internal and external test datasets. FINDINGS: On the held-out test set of Stanford patients not previously seen during model training, EchoNet-Labs achieved an AUC of 0.80 (0.79-0.81) in detecting anemia (low hemoglobin), 0.86 (0.85-0.88) in detecting elevated BNP, 0.75 (0.73-0.78) in detecting elevated troponin I, and 0.74 (0.72-0.76) in detecting elevated BUN. On the external test dataset from Cedars-Sinai, EchoNet-Labs achieved an AUC of 0.80 (0.77-0.82) in detecting anemia, of 0.82 (0.79-0.84) in detecting elevated BNP, of 0.75 (0.72-0.78) in detecting elevated troponin I, and of 0.69 (0.66-0.71) in detecting elevated BUN. We further demonstrate the utility of the model in detecting abnormalities in 10 additional lab tests.
We investigate the features necessary for EchoNet-Labs to make successful detection and identify potential mechanisms for each biomarker using well-known and novel explainability techniques. INTERPRETATION: These results show that deep learning applied to diagnostic imaging can provide additional clinical value and identify phenotypic information beyond current imaging interpretation methods. FUNDING: J.W.H. and B.H. are supported by the NSF Graduate Research Fellowship. D.O. is supported by NIH K99 HL157421-01. J.Y.Z. is supported by NSF CAREER 1942926, NIH R21 MD012867-01, NIH P30AG059307 and by a Chan-Zuckerberg Biohub Fellowship.


Subjects
Biomarkers; Deep Learning; Echocardiography; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Algorithms; Humans; ROC Curve; Software
14.
Biochim Biophys Acta Rev Cancer ; 1875(2): 188515, 2021 04.
Article in English | MEDLINE | ID: mdl-33513392

ABSTRACT

The large volume of data used in cancer diagnosis presents a unique opportunity for deep learning algorithms, which improve in predictive performance with increasing data. When applying deep learning to cancer diagnosis, the goal is often to learn how to classify an input sample (such as images or biomarkers) into predefined categories (such as benign or cancerous). In this article, we examine examples of how deep learning algorithms have been implemented to make predictions related to cancer diagnosis using clinical, radiological, and pathological image data. We present a systematic approach for evaluating the development and application of clinical deep learning algorithms. Based on these examples and the current state of deep learning in medicine, we discuss the future possibilities in this space and outline a roadmap for implementations of deep learning in cancer diagnosis.


Subjects
Computational Biology/methods; Neoplasms/diagnosis; Algorithms; Big Data; Deep Learning; Early Detection of Cancer; Humans; Machine Learning; Neoplasms/pathology
15.
Nat Biomed Eng ; 4(8): 827-834, 2020 08.
Article in English | MEDLINE | ID: mdl-32572199

ABSTRACT

Spatial transcriptomics allows for the measurement of RNA abundance at a high spatial resolution, making it possible to systematically link the morphology of cellular neighbourhoods and spatially localized gene expression. Here, we report the development of a deep learning algorithm for the prediction of local gene expression from haematoxylin-and-eosin-stained histopathology images using a new dataset of 30,612 spatially resolved gene expression data matched to histopathology images from 23 patients with breast cancer. We identified over 100 genes, including known breast cancer biomarkers of intratumoral heterogeneity and the co-localization of tumour growth and immune activation, the expression of which can be predicted from the histopathology images at a resolution of 100 µm. We also show that the algorithm generalizes well to The Cancer Genome Atlas and to other breast cancer gene expression datasets without the need for re-training. Predicting the spatially resolved transcriptome of a tissue directly from tissue images may enable image-based screening for molecular biomarkers with spatial variation.


Subjects
Breast Neoplasms/genetics; Breast Neoplasms/pathology; Deep Learning; Algorithms; Biomarkers, Tumor/genetics; Biomarkers, Tumor/metabolism; Breast Neoplasms/metabolism; Female; Gene Expression Profiling/methods; Humans; Image Processing, Computer-Assisted; Reproducibility of Results; Transcriptome
16.
Proc Natl Acad Sci U S A ; 117(17): 9284-9291, 2020 04 28.
Article in English | MEDLINE | ID: mdl-32291335

ABSTRACT

Prior work finds a diversity paradox: Diversity breeds innovation, yet underrepresented groups that diversify organizations have less successful careers within them. Does the diversity paradox hold for scientists as well? We study this by utilizing a near-complete population of ∼1.2 million US doctoral recipients from 1977 to 2015 and following their careers into publishing and faculty positions. We use text analysis and machine learning to answer a series of questions: How do we detect scientific innovations? Are underrepresented groups more likely to generate scientific innovations? And are the innovations of underrepresented groups adopted and rewarded? Our analyses show that underrepresented groups produce higher rates of scientific novelty. However, their novel contributions are devalued and discounted: For example, novel contributions by gender and racial minorities are taken up by other scholars at lower rates than novel contributions by gender and racial majorities, and equally impactful contributions of gender and racial minorities are less likely to result in successful scientific careers than for majority groups. These results suggest there may be unwarranted reproduction of stratification in academic careers that discounts diversity's role in innovation and partly explains the underrepresentation of some groups in academia.


Subjects
Inventions/trends; Minority Groups/education; Minority Groups/psychology; Cultural Diversity; Faculty; Female; Humans; Male; Racial Groups/education; Racial Groups/psychology; Racism/economics; Racism/psychology; Science; Social Behavior
17.
Nature ; 580(7802): 252-256, 2020 04.
Article in English | MEDLINE | ID: mdl-32269341

ABSTRACT

Accurate assessment of cardiac function is crucial for the diagnosis of cardiovascular disease1, screening for cardiotoxicity2 and decisions regarding the clinical management of patients with a critical illness3. However, human assessment of cardiac function focuses on a limited sampling of cardiac cycles and has considerable inter-observer variability despite years of training4,5. Here, to overcome this challenge, we present a video-based deep learning algorithm, EchoNet-Dynamic, that surpasses the performance of human experts in the critical tasks of segmenting the left ventricle, estimating ejection fraction and assessing cardiomyopathy. Trained on echocardiogram videos, our model accurately segments the left ventricle with a Dice similarity coefficient of 0.92, predicts ejection fraction with a mean absolute error of 4.1% and reliably classifies heart failure with reduced ejection fraction (area under the curve of 0.97). In an external dataset from another healthcare system, EchoNet-Dynamic predicts the ejection fraction with a mean absolute error of 6.0% and classifies heart failure with reduced ejection fraction with an area under the curve of 0.96. Prospective evaluation with repeated human measurements confirms that the model has variance that is comparable to or less than that of human experts. By leveraging information across multiple cardiac cycles, our model can rapidly identify subtle changes in ejection fraction, is more reproducible than human evaluation and lays the foundation for precise diagnosis of cardiovascular disease in real time. As a resource to promote further innovation, we also make publicly available a large dataset of 10,030 annotated echocardiogram videos.
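The Dice similarity coefficient used above to score left-ventricle segmentation can be computed directly from binary masks. A minimal sketch with toy masks (not the authors' code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy 4x4 masks standing in for a predicted vs. expert LV tracing.
pred = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
truth = [[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
```

Here `pred` covers 6 pixels, `truth` covers 7, and they overlap on 6, giving a Dice of 12/13 ≈ 0.92.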


Subjects
Deep Learning; Heart Diseases/diagnosis; Heart Diseases/physiopathology; Heart/physiology; Heart/physiopathology; Models, Cardiovascular; Video Recording; Atrial Fibrillation; Datasets as Topic; Echocardiography; Heart Failure/physiopathology; Hospitals; Humans; Prospective Studies; Reproducibility of Results; Ventricular Function, Left/physiology
18.
NPJ Digit Med ; 3: 10, 2020.
Article in English | MEDLINE | ID: mdl-31993508

ABSTRACT

Echocardiography uses ultrasound technology to capture high temporal and spatial resolution images of the heart and surrounding structures, and is the most common imaging modality in cardiovascular medicine. Using convolutional neural networks on a large new dataset, we show that deep learning applied to echocardiography can identify local cardiac structures, estimate cardiac function, and predict systemic phenotypes that modify cardiovascular risk but are not readily identifiable by human interpretation. Our deep learning model, EchoNet, accurately identified the presence of pacemaker leads (AUC = 0.89), enlarged left atrium (AUC = 0.86), left ventricular hypertrophy (AUC = 0.75), left ventricular end systolic and diastolic volumes (R2 = 0.74 and R2 = 0.70), and ejection fraction (R2 = 0.50), as well as predicted systemic phenotypes of age (R2 = 0.46), sex (AUC = 0.88), weight (R2 = 0.56), and height (R2 = 0.33). Interpretation analysis validates that EchoNet shows appropriate attention to key cardiac structures when performing human-explainable tasks and highlights hypothesis-generating regions of interest when predicting systemic phenotypes difficult for human interpretation. Machine learning on echocardiography images can streamline repetitive tasks in the clinical workflow, provide preliminary interpretation in areas with insufficient qualified cardiologists, and predict phenotypes challenging for human evaluation.

19.
Proc Mach Learn Res ; 84: 58-67, 2018.
Article in English | MEDLINE | ID: mdl-31187095

ABSTRACT

Principal component analysis (PCA) is one of the most powerful tools in machine learning. The simplest method for PCA, the power iteration, requires O(1/Δ) full-data passes to recover the principal component of a matrix with eigen-gap Δ. Lanczos, a significantly more complex method, achieves an accelerated rate of O(1/√Δ) passes. Modern applications, however, motivate methods that only ingest a subset of available data, known as the stochastic setting. In the online stochastic setting, simple algorithms like Oja's iteration achieve the optimal sample complexity O(σ²/Δ²). Unfortunately, they are fully sequential, and also require O(σ²/Δ²) iterations, far from the O(1/√Δ) rate of Lanczos. We propose a simple variant of the power iteration with an added momentum term that achieves both the optimal sample and iteration complexity. In the full-pass setting, standard analysis shows that momentum achieves the accelerated rate, O(1/√Δ). We demonstrate empirically that naively applying momentum to a stochastic method does not result in acceleration. We perform a novel, tight variance analysis that reveals the "breaking-point variance" beyond which this acceleration does not occur. By combining this insight with modern variance reduction techniques, we construct stochastic PCA algorithms, for the online and offline setting, that achieve an accelerated iteration complexity O(1/√Δ). Due to the embarrassingly parallel nature of our methods, this acceleration translates directly to wall-clock time if deployed in a parallel environment. Our approach is very general, and applies to many non-convex optimization problems that can now be accelerated using the same technique.
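The momentum idea in the full-pass setting can be sketched in a few lines: the update x_{t+1} = A x_t - β x_{t-1}, with β chosen near λ₂²/4. This toy version is illustrative only; the paper's stochastic and variance-reduced variants are substantially more involved:

```python
import numpy as np

def power_iteration_momentum(A, beta, iters=60, seed=0):
    """Power iteration with a heavy-ball momentum term:
        x_{t+1} = A x_t - beta * x_{t-1},
    rescaled jointly each step so the iterates stay bounded (the recursion
    is homogeneous, so joint rescaling preserves the direction)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    x_prev = np.zeros_like(x)
    for _ in range(iters):
        x_next = A @ x - beta * x_prev
        scale = np.linalg.norm(x_next)
        x, x_prev = x_next / scale, x / scale
    return x

# Diagonal test matrix: top eigenvalue 2.0, second 1.0; beta = (lambda_2/2)^2.
A = np.diag([2.0, 1.0, 0.5])
v = power_iteration_momentum(A, beta=0.25)
```

With this β, the secondary modes decay at roughly √β per step relative to the leading mode, which is the source of the accelerated rate.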

20.
Proc Mach Learn Res ; 70: 273-82, 2017 Aug.
Article in English | MEDLINE | ID: mdl-30882087

ABSTRACT

Curating labeled training data has become the primary bottleneck in machine learning. Recent frameworks address this bottleneck with generative models to synthesize labels at scale from weak supervision sources. The generative model's dependency structure directly affects the quality of the estimated labels, but selecting a structure automatically without any labeled data is a distinct challenge. We propose a structure estimation method that maximizes the ℓ1-regularized marginal pseudolikelihood of the observed data. Our analysis shows that the amount of unlabeled data required to identify the true structure scales sublinearly in the number of possible dependencies for a broad class of models. Simulations show that our method is 100× faster than a maximum likelihood approach and selects 1/4 as many extraneous dependencies. We also show that our method provides an average of 1.5 F1 points of improvement over existing, user-developed information extraction applications on real-world data such as PubMed journal abstracts.
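The ℓ1-regularized pseudolikelihood approach can be illustrated on binary data: regress each variable on all the others with an ℓ1 penalty and keep the variables with nonzero coefficients as estimated dependencies. A toy proximal-gradient (ISTA) sketch, not the paper's implementation; the data-generating process below is invented for illustration:

```python
import numpy as np

def l1_neighbors(X, j, lam, lr=0.1, steps=500):
    """Estimated dependencies of binary variable j, found by maximizing its
    ℓ1-regularized logistic pseudolikelihood given the other variables:
    a gradient step on the logistic loss, then soft-thresholding (ISTA)."""
    n, d = X.shape
    y = X[:, j].astype(float)
    others = np.delete(np.arange(d), j)
    Z = X[:, others].astype(float)
    w, b = np.zeros(len(others)), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
        w -= lr * (Z.T @ (p - y) / n)
        b -= lr * float(np.mean(p - y))
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox step
    return {int(k) for k, v in zip(others, w) if v != 0.0}

# Toy data: variable 0 copies variable 1 90% of the time; variable 2 is noise.
rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, 2000)
x0 = np.where(rng.random(2000) < 0.9, x1, 1 - x1)
x2 = rng.integers(0, 2, 2000)
X = np.column_stack([x0, x1, x2])
```

Running `l1_neighbors(X, 0, lam=0.05)` should recover variable 1 as a dependency of variable 0 while the ℓ1 penalty zeroes out the independent variable 2.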
